
    Data Analytics in Higher Education: Key Concerns and Open Questions

    “Big Data” and data analytics affect all of us. Data collection, analysis, and use on a large scale are an important and growing part of commerce, governance, communication, law enforcement, security, finance, medicine, and research. And the theme of this symposium, “Individual and Informational Privacy in the Age of Big Data,” is expansive; we could have long and fruitful discussions about practices, laws, and concerns in any of these domains. But a big part of the audience for this symposium is students and faculty in higher education institutions (HEIs), and the subject of this paper is data analytics in our own backyards. Higher education learning analytics (LA) is something that most of us involved in this symposium are familiar with. Students have encountered LA in their courses and in their interactions with their law school or with their undergraduate institutions; instructors use systems that collect information about their students; and administrators use information to help understand and steer their institutions. More importantly, though, data analytics in higher education is something that those of us participating in the symposium can actually control. Students can put pressure on administrators, and faculty often participate in university governance. Moreover, the systems in place in HEIs are more easily comprehensible to many of us because we work with them on a day-to-day basis. Students use systems as part of their course work, in their residences, in their libraries, and elsewhere. Faculty deploy course management systems (CMS) such as Desire2Learn, Moodle, Blackboard, and Canvas to structure their courses, and administrators use information gleaned from analytics systems to make operational decisions. If we (the participants in the symposium) indeed care about Individual and Informational Privacy in the Age of Big Data, the topic of this paper is a pretty good place to hone our thinking and put our ideas into practice.

    Student Privacy in Learning Analytics: An Information Ethics Perspective

    In recent years, educational institutions have started using the tools of commercial data analytics in higher education. By gathering information about students as they navigate campus information systems, learning analytics “uses analytic techniques to help target instructional, curricular, and support resources” to examine student learning behaviors and change students’ learning environments. As a result, the information educators and educational institutions have at their disposal is no longer demarcated by course content and assessments, and old boundaries between information used for assessment and information about how students live and work are blurring. Our goal in this paper is to provide a systematic discussion of the ways in which privacy and learning analytics conflict and to provide a framework for understanding those conflicts. We argue that there are five crucial issues about student privacy that we must address in order to ensure that, whatever the laudable goals and gains of learning analytics, they are commensurate with respecting students’ privacy and associated rights, including (but not limited to) autonomy interests. First, we argue that we must distinguish among different entities with respect to whom students have, or lack, privacy. Second, we argue that we need clear criteria for what information may justifiably be collected in the name of learning analytics. Third, we need to address whether purported consequences of learning analytics (e.g., better learning outcomes) are justified and what the distributions of those consequences are. Fourth, we argue that regardless of how robust the benefits of learning analytics turn out to be, students have important autonomy interests in how information about them is collected. Finally, we argue that it is an open question whether the goods that justify higher education are advanced by learning analytics, or whether collection of information actually runs counter to those goods.

    Frictional magnetodrag between spatially separated two-dimensional electron systems: Coulomb versus phonon mediated electron-electron interaction

    We study the frictional drag due to Coulomb and phonon mediated electron-electron interaction in a double layer electron system exposed to a perpendicular magnetic field. Within the random phase approximation we calculate the dispersion relation of the intra-Landau-level magnetoplasmons at finite temperatures and distinguish their contribution to the magnetodrag. We calculate the transresistivity $\rho_{\mathrm{Drag}}$ as a function of magnetic field $B$, temperature $T$, and interlayer spacing $\Lambda$ for matched electron densities. For $\Lambda = 200$ nm we find that $\rho_{\mathrm{Drag}}$ is solely due to phonon exchange and shows no double-peak structure as a function of $B$. For $\Lambda = 30$ nm, $\rho_{\mathrm{Drag}}$ shows the double-peak structure and is mainly due to Coulomb interaction. The value of $\rho_{\mathrm{Drag}}$ is about 0.3 $\Omega$ at $T = 2$ K for the half-filled second Landau level, which is about 13 times larger than the value for $\Lambda = 200$ nm. At the lower edge of the temperature interval from 0.1 to 8 K, $\rho_{\mathrm{Drag}}/T^{2}$ remains finite for $\Lambda = 30$ nm, while it tends to zero for $\Lambda = 200$ nm. Near the upper edge of this interval, $\rho_{\mathrm{Drag}}$ for $\Lambda = 30$ nm is approximately linear in $T$, while for $\Lambda = 200$ nm it decreases slowly in $T$. Therefore, the peak of $\rho_{\mathrm{Drag}}/T^{2}$ is very sharp for $\Lambda = 200$ nm. We ascribe this strikingly different magnetic field and temperature dependence of $\rho_{\mathrm{Drag}}$ mainly to the weak screening effect at large interlayer separations.
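
    For orientation, a minimal sketch of the standard lowest-order (zero-field) drag transresistivity is given below; prefactor conventions vary across the literature, and the magnetodrag calculation described above, carried out within the RPA in a quantizing field $B$, is more involved. Here $W_{12}(q,\omega)$ is the screened interlayer coupling (Coulomb or phonon mediated), $\chi_{i}$ are the layer polarizabilities, and $n_{i}$ the layer densities:

    \rho_{\mathrm{Drag}} = \frac{\hbar^{2}}{8\pi^{2} e^{2} n_{1} n_{2} k_{B} T} \int_{0}^{\infty} dq\, q^{3} \int_{0}^{\infty} d\omega\, \frac{|W_{12}(q,\omega)|^{2}\, \mathrm{Im}\,\chi_{1}(q,\omega)\, \mathrm{Im}\,\chi_{2}(q,\omega)}{\sinh^{2}\!\left(\hbar\omega / 2 k_{B} T\right)}

    The thermal $\sinh^{-2}$ factor in this expression is what produces the low-temperature $\rho_{\mathrm{Drag}} \propto T^{2}$ behavior against which the abstract normalizes its results ($\rho_{\mathrm{Drag}}/T^{2}$).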

    The Temptation of Data-enabled Surveillance: Are Universities the Next Cautionary Tale?

    There is increasing concern about “surveillance capitalism,” whereby for-profit companies generate value from data, while individuals are unable to resist (Zuboff 2019). Non-profits using data-enabled surveillance receive less attention. Higher education institutions (HEIs) have embraced data analytics, but the wide latitude that private, profit-oriented enterprises have to collect data is inappropriate for HEIs. HEIs have a fiduciary relationship to students, not a narrowly transactional one (see Jones et al., forthcoming). They are responsible for facets of student life beyond education. In addition to classrooms, learning management systems, and libraries, HEIs manage dormitories, gyms, dining halls, health facilities, career advising, police departments, and student employment. HEIs collect and use student data in all of these domains, ostensibly to understand learner behaviors and contexts, improve learning outcomes, and increase institutional efficiency through “learning analytics” (LA). ID card swipes and Wi-Fi log-ins can track student location, class attendance, use of campus facilities, eating habits, and friend groups. Course management systems capture how students interact with readings, video lectures, and discussion boards. Application materials provide demographic information. These data are used to identify students needing support, predict enrollment demands, and target recruiting efforts. These are laudable aims. However, current LA practices may be inconsistent with HEIs’ fiduciary responsibilities. HEIs often justify LA as advancing student interests, but some projects advance primarily organizational welfare and institutional interests. Moreover, LA advances a narrow conception of student interests while discounting privacy and autonomy. Students are generally unaware of the information collected, do not provide meaningful consent, and express discomfort and resigned acceptance about HEI data practices, especially for non-academic data (see Jones et al., forthcoming). The breadth and depth of student information available, combined with their fiduciary responsibility, create a duty for HEIs to exercise substantial restraint and rigorous evaluation in data collection and use.

    Defining Early Positive Response to Psychotherapy: An Empirical Comparison Between Clinically Significant Change Criteria and Growth Mixture Modeling

    Several different approaches have been applied to identify early positive change in response to psychotherapy, in order to predict later treatment outcome and length and to use this information for outcome monitoring and treatment planning. In this study, simple methods based on clinically significant change criteria and computationally demanding growth mixture modeling (GMM) are compared with regard to their overlap and uniqueness as well as their characteristics in terms of initial impairment, therapy outcome, and treatment length. The GMM approach identified a highly specific subgroup of early improving patients. These patients were characterized by higher average intake impairments and higher pre- to posttreatment score differences. Although more specific for the prediction of treatment success, GMM was much less sensitive than clinically significant and reliable change criteria. There were no differences between the groups with regard to treatment length. Because each of the approaches had specific advantages, the results suggest a combination of both methods for practical use in routine outcome monitoring and treatment planning.
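
    For contrast with the GMM approach, a minimal sketch of Jacobson-Truax-style reliable and clinically significant change criteria, which the simpler "clinically significant change criteria" above presumably resemble; the reliability, norms, and scores below are illustrative assumptions, not values from the study:

    import numpy as np

    def reliable_change_index(pre, post, sd_pre, reliability):
        # Reliable change index: observed change divided by the standard error of the difference.
        se_meas = sd_pre * np.sqrt(1.0 - reliability)
        s_diff = np.sqrt(2.0 * se_meas**2)
        return (pre - post) / s_diff  # positive values = improvement on a symptom scale

    def cs_cutoff(mean_clinical, sd_clinical, mean_functional, sd_functional):
        # Criterion "c": weighted midpoint between the clinical and functional distributions.
        return (sd_functional * mean_clinical + sd_clinical * mean_functional) / (
            sd_clinical + sd_functional)

    # Hypothetical questionnaire values (not from the paper).
    pre_score, early_score = 24.0, 12.0
    rci = reliable_change_index(pre_score, early_score, sd_pre=6.0, reliability=0.89)
    cutoff = cs_cutoff(mean_clinical=25.0, sd_clinical=6.0, mean_functional=8.0, sd_functional=5.0)
    early_positive = rci > 1.96 and early_score < cutoff  # reliable improvement + crossing the cutoff
    print(rci, cutoff, early_positive)

    A GMM-based definition, by contrast, fits latent classes of session-by-session growth trajectories and labels patients by their most likely class, which is computationally heavier but, as reported above, more specific for predicting treatment success.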

    Towards personalized allocation of patients to therapists

    Objective: Psychotherapy outcomes vary between therapists, but it is unclear how such information can be used for treatment planning or practice development. This proof-of-concept study aimed to develop a data-driven method to match patients to therapists. Method: We analyzed data from N = 4,849 patients who accessed cognitive–behavioral therapy in U.K. primary care services. The main outcome was posttreatment reliable and clinically significant improvement (RCSI) on the Patient Health Questionnaire–9 (PHQ-9) depression measure. Machine-learning analyses were applied in a training sample (N = 2,425 patients treated by 68 therapists in Year 1), including a chi-squared automatic interaction detector (CHAID) algorithm and a random forest (RF) algorithm. The predictive models were cross-validated in a statistically independent test sample (N = 2,424 patients treated by the same therapists in Year 2) and evaluated using odds ratios (ORs) adjusted for baseline depression severity. Results: We identified subgroups of therapists that were differentially effective for highly specific subgroups of patients, yielding 17 classes of patient-to-therapist matches. The overall base rate of RCSI in the sample was 40.4%, but this varied from 10.5% to 69.9% across classes. Cases classed by the prediction algorithms as expected responders in the test sample were approximately 60% more likely to attain posttreatment RCSI compared with those classed as nonresponders (adjusted ORs = 1.59, 1.60; p < .001). Conclusions: Machine-learning approaches could help to improve treatment outcomes by enabling the strategic allocation of patients to therapists and therapists to supervisors.
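
    A minimal sketch of the random-forest half of such a pipeline, on simulated stand-in data; the features, hyperparameters, and the crude unadjusted odds ratio below are illustrative assumptions rather than the study's specification (which also used CHAID and adjusted the ORs for baseline severity):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_train, n_test, n_features = 2425, 2424, 12

    # Stand-ins for patient features (e.g., baseline PHQ-9, demographics) plus a therapist index.
    X_train = rng.normal(size=(n_train, n_features))
    y_train = rng.integers(0, 2, size=n_train)        # 1 = posttreatment RCSI, 0 = not
    X_test = rng.normal(size=(n_test, n_features))    # later cohort seen by the same therapists
    y_test = rng.integers(0, 2, size=n_test)

    clf = RandomForestClassifier(n_estimators=500, min_samples_leaf=20, random_state=0)
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)  # 1 = classed as an expected responder with the assigned therapist

    # Crude 2x2 odds ratio of RCSI for predicted responders vs. predicted nonresponders.
    a = np.sum((pred == 1) & (y_test == 1)); b = np.sum((pred == 1) & (y_test == 0))
    c = np.sum((pred == 0) & (y_test == 1)); d = np.sum((pred == 0) & (y_test == 0))
    print("unadjusted OR:", (a / b) / (c / d))

    Strategic allocation would then amount to scoring each candidate patient-therapist pairing with the fitted model and assigning patients where the predicted probability of RCSI is highest.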